exp 1
- Education > Educational Setting > Online (0.41)
- Energy (0.34)
- Asia > Middle East > Jordan (0.05)
- North America > United States > California > Alameda County > Berkeley (0.04)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- Asia > China (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Optimization (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Supervised Learning > Representation Of Examples (0.40)
A  Regularizing Optimal Transport with f-Divergences

[Table: examples of f-divergences (columns: Name, f(v), ...); entries lost in extraction]
The primal and dual problems are related through the Lagrangian $L(\pi, \varphi, \psi)$. We proceed to the proofs of the theorems stated in Section 4. The constants depend on Assumption (NTK) and on the regularization parameter, and may also depend indirectly on the bound $R$. Theorem 4.2 follows immediately from Lemmas B.1 and B.2. The following result follows from Propositions E.4 and E.5 of Luise et al. Interestingly, the rate of estimation of the Sinkhorn plan breaks the curse of dimensionality.

B.2  Log-concavity of the Sinkhorn Factor

The optimal entropy-regularized Sinkhorn plan is given by $\pi^\varepsilon(x, y) = \exp\big((f(x) + g(y) - c(x, y))/\varepsilon\big)\,\mathrm{d}\mu(x)\,\mathrm{d}\nu(y)$, where the optimal potentials $f$ and $g$ satisfy the Sinkhorn fixed-point equations, e.g. $f(x) = -\varepsilon \log \int \exp\big((g(y) - c(x, y))/\varepsilon\big)\,\mathrm{d}\nu(y)$. Using this result, one can prove the following lemma.
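The fixed-point equations above are exactly what the classical Sinkhorn iteration solves in the discrete case. A minimal sketch in Python, assuming discrete marginals `mu`, `nu`, a cost matrix `C`, and regularization `eps` (all names are illustrative, not from the excerpt):

```python
import numpy as np

def sinkhorn_plan(mu, nu, C, eps, n_iter=500):
    """Fixed-point (Sinkhorn) iteration for the entropy-regularized plan.

    mu, nu : marginal weights (1-D arrays summing to 1)
    C      : cost matrix of shape (len(mu), len(nu))
    eps    : entropic regularization parameter
    """
    K = np.exp(-C / eps)            # Gibbs kernel
    u = np.ones_like(mu)
    v = np.ones_like(nu)
    for _ in range(n_iter):
        u = mu / (K @ v)            # enforce the first marginal
        v = nu / (K.T @ u)          # enforce the second marginal
    # Optimal plan: pi = diag(u) K diag(v)
    return u[:, None] * K * v[None, :]
```

In this scaling form the potentials of the excerpt correspond to $f = \varepsilon \log u$ and $g = \varepsilon \log v$, up to additive constants.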
- Asia > China > Hong Kong (0.05)
- North America > United States > New Jersey > Mercer County > Princeton (0.04)
- North America > United States > California > Los Angeles County > Long Beach (0.04)
- North America > United States > California > Alameda County > Berkeley (0.14)
- Asia > China > Beijing > Beijing (0.04)
- Asia > China > Shanghai > Shanghai (0.04)
- (2 more...)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Optimization (0.69)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.48)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (0.46)
Escaping from saddle points on Riemannian manifolds
Yue Sun, Nicolas Flammarion, Maryam Fazel
Finding the global minimum of Eq. (1) is NP-hard in general; our goal is to find an approximate second-order stationary point with first-order optimization methods. We are interested in first-order methods because they are extremely prevalent in machine learning, partly because computing Hessians is often too costly.
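As one illustration of such a first-order method (a sketch, not the authors' algorithm), here is perturbed Riemannian gradient descent on the unit sphere, where injected tangent noise is the standard device for escaping strict saddle points; the objective gradient `grad_f`, step size `eta`, and perturbation radius `r` are assumptions for the sketch:

```python
import numpy as np

def perturbed_rgd_sphere(grad_f, x0, eta=0.01, r=1e-3, n_iter=1000, rng=None):
    """Perturbed Riemannian gradient descent on the unit sphere S^{n-1}.

    grad_f : Euclidean gradient of the objective
    x0     : initial point (will be normalized onto the sphere)
    eta    : step size
    r      : radius of the random tangent perturbation
    """
    rng = np.random.default_rng() if rng is None else rng
    x = x0 / np.linalg.norm(x0)
    for _ in range(n_iter):
        g = grad_f(x)
        g = g - (g @ x) * x            # project gradient to the tangent space
        xi = rng.normal(size=x.shape)
        xi = xi - (xi @ x) * x         # random tangent perturbation
        x = x - eta * g + r * xi
        x = x / np.linalg.norm(x)      # retraction back onto the sphere
    return x
```

Projection onto the tangent space plus renormalization (a retraction) keeps every iterate on the manifold, so the method evaluates only gradients, never Hessians.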
- Asia > Middle East > Jordan (0.05)
- North America > United States > Washington > King County > Seattle (0.04)
- North America > United States > Rhode Island > Providence County > Providence (0.04)
- (2 more...)
Supplemental to Shape and Structure Preserving Differential Privacy

1  Proof of Lemma 1
The equality is due to the fact that the parallel transport map is an isometry between tangent spaces and fixes the origin; the first inequality follows from the reverse triangle inequality, while the last follows from the upper bound on the Hessian of U in (2). Here $\nabla U(x, D)$ denotes the gradient vector field of $U$, carried between tangent spaces by the isometric parallel transport, which maps the zero vector to the zero vector.

Simulations pertaining to the sphere and Kendall shape space were done on a desktop computer with an Intel Xeon processor at 3.60 GHz and 31.9 GB of RAM. Simulations pertaining to symmetric positive-definite matrices were performed on the Pennsylvania State University's Institute for Computational and Data Sciences' Roar supercomputer. All simulations were done in Matlab.
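For concreteness, a generic version of the inequality pattern those steps describe, with illustrative notation ($\Gamma$ for the parallel transport, $v, w$ for tangent vectors; a sketch, not the paper's display):

```latex
% Parallel transport \Gamma is a linear isometry, so \|\Gamma v\| = \|v\|.
% Reverse triangle inequality applied to \Gamma v and w:
\[
  \|\Gamma v - w\|
    \;\ge\; \bigl|\, \|\Gamma v\| - \|w\| \,\bigr|
    \;=\;   \bigl|\, \|v\| - \|w\| \,\bigr|.
\]
% A Hessian upper bound \nabla^2 U \preceq L I along the connecting geodesic
% then yields the Lipschitz-type gradient control
\[
  \|\Gamma\, \nabla U(y) - \nabla U(x)\| \;\le\; L \, d(x, y).
\]
```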
- Information Technology > Artificial Intelligence > Machine Learning (0.47)
- Information Technology > Hardware (0.34)
- Europe > United Kingdom > England > Nottinghamshire > Nottingham (0.14)
- North America > United States > Pennsylvania (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- (2 more...)